181 research outputs found

    Automatic light source placement for maximum visual information recovery

    The definitive version is available at http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2007.00944.x/abstract. The automatic selection of good viewing parameters is a very complex problem. In most cases, the notion of a good view strongly depends on the concrete application. Moreover, even when an intuitive definition of a good view is available, it is often difficult to establish a measure that puts it into practice. Commonly, two kinds of viewing parameters must be set: camera parameters (position and orientation) and lighting parameters (the number of light sources, their positions and, for spotlights, their orientations). The former determine how much of the geometry can be captured, and the latter influence how much of it is revealed (i.e., illuminated) to the user. Unfortunately, ensuring that certain parts of a scene are lit does not guarantee that their details will be communicated to the user, as the amount of illumination may be too low or too high. In this paper we define a metric that quantifies the amount of information about an object that is effectively communicated to the user for a fixed camera position. This measure is based on an information-theoretic concept, the Shannon entropy, and is applied to the problem of automatically selecting light positions that adequately illuminate an object. To validate the results, we carried out a user experiment, which also helped us explore other related measures.
    Preprint
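    As an illustration of the kind of entropy-based measure the abstract refers to (the exact definition used in the paper may differ), a Shannon-entropy score over the per-face contributions visible from a viewpoint V can be written as:

        H(V) = -\sum_{i=1}^{N_f} \frac{a_i}{a_t} \log_2 \frac{a_i}{a_t},
        \qquad a_t = \sum_{i=1}^{N_f} a_i

    where a_i is the contribution of face i as seen from V (for instance, its perceived illuminated area). The score is highest when that contribution is spread evenly over the visible faces, so candidate light positions can be ranked by how much they raise it.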

    Visual analysis of research paper collections using normalized relative compression

    The analysis of research paper collections is an interesting topic that can give insight into whether a research area is stalled on the same problems or shows a great amount of novelty every year. Previous research has addressed similar tasks by analyzing keywords or reference lists, with different degrees of human intervention. In this paper, we demonstrate how, using Normalized Relative Compression together with a set of automated data-processing tasks, we can successfully compare research articles and document collections visually. We also achieve very similar results with Normalized Conditional Compression, which can be computed with a regular compressor. With our approach, we can group papers from different disciplines, analyze how a conference evolves across editions, or how the profile of a researcher changes over time. We provide a set of tests that validate our technique and show that it performs better on these tasks than previously proposed techniques.
    Peer Reviewed. Postprint (published version).
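    As a minimal sketch of how a conditional-compression similarity can be computed with a regular, off-the-shelf compressor (the exact NRC/NCC definitions used in the paper may differ), one can approximate C(x|y) by C(yx) - C(y):

        import zlib

        def csize(data: bytes) -> int:
            # Compressed size in bytes with a general-purpose compressor.
            return len(zlib.compress(data, 9))

        def ncc(x: bytes, y: bytes) -> float:
            # Normalized conditional compression of x given y:
            # close to 0 when y "explains" x well, closer to 1 when they are unrelated.
            return (csize(y + x) - csize(y)) / csize(x)

        # Hypothetical toy documents standing in for full-text research papers.
        paper_a = b"entropy based viewpoint selection for polygonal models " * 20
        paper_b = b"viewpoint entropy and light placement for polygonal models " * 20
        paper_c = b"convolutional networks for image classification " * 20
        print(ncc(paper_a, paper_b))   # expected to be lower (more similar)
        print(ncc(paper_a, paper_c))   # expected to be higher (less similar)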

    Automatic view selection through depth-based view stability analysis

    Although the real world is composed of three-dimensional objects, we communicate information using two-dimensional media. The initial 2D view we see of an object has a great influence on how we perceive it. Deciding which of all possible 2D representations of a 3D object communicates the most information to the user is still challenging, and the answer may be highly dependent on the task at hand. Psychophysical experiments have shown that three-quarter views (oblique views between the frontal view and the profile view) are often preferred as representative views for 3D objects; however, for most models, no knowledge of their proper orientation is available. Our goal is the selection of informative views without any user intervention. To do so, we analyze some stability-based view descriptors and present a new one that computes view stability through the use of depth maps, without prior knowledge of the geometry or orientation of the object. We show that it produces good views that, in most of the analyzed cases, are close to three-quarter views.
    Peer Reviewed. Postprint (published version).
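    A minimal sketch of a depth-based stability score, assuming depth maps have already been rendered off-screen for a candidate view and for slightly perturbed cameras around it (the descriptor actually proposed in the paper may compare depth maps differently):

        import numpy as np

        def view_stability(depth_center, depth_neighbors):
            # Stability of a candidate view: one minus the mean absolute change of the
            # depth map when the camera is slightly perturbed. Depth values in [0, 1].
            diffs = [np.abs(depth_center - d).mean() for d in depth_neighbors]
            return 1.0 - float(np.mean(diffs))

        # Hypothetical data: a rendered depth map and four perturbed-camera depth maps.
        rng = np.random.default_rng(0)
        center = rng.random((256, 256))
        neighbors = [np.clip(center + rng.normal(0, 0.01, center.shape), 0, 1)
                     for _ in range(4)]
        print(view_stability(center, neighbors))   # close to 1 -> stable view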

    Accurate molecular atom selection in VR

    Accurate selection in cluttered scenes is complex because a high degree of precision is required. In Virtual Reality environments it is even harder, because it is more difficult to point at a small object with our arms in the air. Not only do our arms move slightly, but pressing the button or trigger further reduces our already weak stability. In this paper, we present two alternatives to classical ray pointing intended to facilitate the selection of atoms in molecular environments. We implemented and analyzed these techniques through an informal user study and found that they were highly appreciated by the users. These selection methods could also be interesting in other crowded environments beyond molecular visualization.
    Peer Reviewed. Postprint (published version).

    Depth-enhanced maximum intensity projection

    The two most common methods for the visualization of volumetric data are Direct Volume Rendering (DVR) and Maximum Intensity Projection (MIP). Direct Volume Rendering is superior to MIP in providing a larger amount of properly shaded detail, because it employs a more complex shading model together with user-defined transfer functions. However, the generation of adequate transfer functions is a laborious and time-consuming task, even for expert users. As a consequence, medical doctors often use MIP because it does not require the definition of complex transfer functions and because it gives good results on contrast-enhanced images. Unfortunately, MIP does not convey depth ordering, and therefore spatial context is lost. In this paper we present a new approach to MIP rendering that uses depth and simple color blending to disambiguate the ordering of internal structures, while maintaining most of the details visible with MIP. It is usually faster than DVR and only requires the transfer function used by MIP rendering.
    Peer Reviewed. Postprint (author's final draft).
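    A minimal sketch of the idea, assuming a scalar volume stored as a NumPy array and rays cast along the z axis; the depth of the per-ray maximum drives a simple two-color blend (the blending scheme actually used in the paper may differ):

        import numpy as np

        def depth_enhanced_mip(volume):
            # Classic MIP intensity plus the depth at which the maximum occurs.
            mip = volume.max(axis=2)
            depth = volume.argmax(axis=2) / (volume.shape[2] - 1)   # normalized to [0, 1]
            # Blend between a "near" and a "far" color according to depth,
            # then modulate by the MIP intensity so bright structures stay bright.
            near = np.array([1.0, 0.7, 0.3])
            far = np.array([0.3, 0.5, 1.0])
            color = (1.0 - depth)[..., None] * near + depth[..., None] * far
            return color * mip[..., None]

        # Hypothetical 128^3 volume with values in [0, 1].
        vol = np.random.default_rng(1).random((128, 128, 128))
        image = depth_enhanced_mip(vol)
        print(image.shape)   # (128, 128, 3)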

    An empirical evaluation of document embeddings and similarity metrics for scientific articles

    The comparison of documents has a wide range of applications in several fields, such as article or patent search, bibliography recommendation systems, and the visualization of document collections. One of the key tasks that such problems have in common is the evaluation of a similarity metric, and many such metrics have been proposed in the literature. Lately, deep learning techniques have gained a lot of popularity; however, it is difficult to analyze how the different metrics perform against each other. In this paper, we present a systematic empirical evaluation of several of the most popular similarity metrics when applied to research articles. We analyze those metrics in two ways: with a synthetic test that uses scientific papers and Ph.D. theses, and in a real-world scenario where we evaluate their ability to cluster papers from different areas of research.
    This research was funded by project TIN2017-88515-C2-1-R (Ministerio de Economía y Competitividad), under MCIN/AEI/10.13039/501100011033/FEDER "A way to make Europe".
    Peer Reviewed. Postprint (published version).
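    As a minimal sketch of the basic building block being evaluated, here is cosine similarity over document embedding vectors; the embedding models (e.g., TF-IDF, doc2vec, transformer-based encoders) and the alternative metrics compared in the paper are not reproduced here:

        import numpy as np

        def cosine_similarity(a, b):
            # Cosine similarity between two document embedding vectors.
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Hypothetical 300-dimensional embeddings of two articles.
        rng = np.random.default_rng(2)
        doc_a = rng.random(300)
        doc_b = rng.random(300)
        print(cosine_similarity(doc_a, doc_b))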

    Two-step techniques for accurate selection of small elements in VR environments

    One of the key interactions in 3D environments is target acquisition, which can be challenging when targets are small or in cluttered scenes. In such cases, incorrect elements may be selected, leading to frustration and wasted time. Accuracy is further hindered by the physical act of selection itself, which typically involves pressing a button; this action reduces stability and increases the likelihood of erroneous target acquisition. We focus on molecular visualization and on the challenge of selecting atoms, which are rendered as small spheres. We present two techniques that improve upon previous progressive selection techniques. They facilitate the acquisition of neighbors after an initial selection, providing a more comfortable experience than classical ray-based selection, particularly with occluded elements. We conducted a pilot study followed by two formal user studies. The results indicate that our approaches were highly appreciated by the participants. These techniques could be suitable for other crowded environments as well.
    This paper has been supported by TIN2017-88515-C2-1-R (GEN3DLIVE) from the Spanish Ministerio de Economía y Competitividad, and PID2021-122136OB-C21 from the Ministerio de Ciencia e Innovación, Spain, by FEDER (EU) funds. Elena Molina has been supported by an FI-SDUR doctoral grant from the Generalitat de Catalunya and an FPU grant from the Ministerio de Ciencia e Innovación, Spain.
    Peer Reviewed. Postprint (published version).
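    A minimal sketch of the second step, assuming a coarse ray-based pick has already returned an initial atom: the k nearest atoms are offered as refinement candidates that are easier to acquire than re-aiming the ray (the actual widgets and interactions proposed in the paper are not reproduced here):

        import numpy as np

        def neighbor_candidates(atom_positions, selected_idx, k=6):
            # Distances from the initially picked atom to every other atom;
            # the k closest ones become the refinement targets of the second step.
            d = np.linalg.norm(atom_positions - atom_positions[selected_idx], axis=1)
            order = np.argsort(d)
            return order[1:k + 1]   # skip the picked atom itself

        # Hypothetical molecule: 500 atoms with random coordinates (angstroms).
        atoms = np.random.default_rng(3).random((500, 3)) * 50.0
        print(neighbor_candidates(atoms, selected_idx=42))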

    Visual analysis of multi-labelled temporal noise data from multiple sensors

    Environmental noise pollution is a problem for city inhabitants that can be especially severe in large cities. To implement measures that can alleviate this problem, it is necessary to understand the extent and impact of the different noise sources. Although gathering data is relatively cheap, processing and analyzing it is still complex. Besides the lack of an automatic method for labelling city sounds, perhaps more important is the absence of a tool that allows domain experts to analytically explore manually labelled data. To solve this problem, we have created a visual analytics application that facilitates the exploration of multi-labelled temporal data captured at the four corners of a crossing in a populated area of Barcelona, the Eixample neighborhood. Our tool consists of a series of linked interactive views that facilitate top-down (from noise events to labels) and bottom-up (from labels to time slots) exploration of the captured data.
    This project has been supported by grants TIN2017-88515-C2-1-R (GEN3DLIVE) from the Spanish Ministerio de Economía y Competitividad, by FEDER (EU) funds, and PID2021-122136OB-C21 from MCIN/AEI/10.13039/501100011033/FEDER, EU.
    Peer Reviewed. Postprint (published version).

    Adaptive transfer functions: improved multiresolution visualization of medical models

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9. Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice the models are usually up to 512x512x2000 voxels. These resolutions exceed the capabilities of the conventional GPUs usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The resulting data loss reduces visualization quality, and it is not commonly compensated by other actions that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that adapts the transfer function to downsampled multiresolution models so that rendering quality is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can be used to increase rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also show an evaluation of these results based on perceptual metrics.
    Peer Reviewed. Postprint (author's final draft).
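    A heavily simplified sketch of the underlying problem and one naive compensation, assuming average-pooling downsampling and a 1D lookup-table transfer function: downsampling smooths voxel values towards the mean, so the same transfer function classifies the reduced model differently; stretching the reduced value range back to the original one before the lookup is one crude way to compensate (this is not the algorithm proposed in the paper):

        import numpy as np

        def downsample(volume, factor=2):
            # Average-pooling downsampling: the usual source of value smoothing.
            x, y, z = (s // factor * factor for s in volume.shape)
            v = volume[:x, :y, :z].reshape(x // factor, factor,
                                           y // factor, factor,
                                           z // factor, factor)
            return v.mean(axis=(1, 3, 5))

        def classify(tf, values, vmin, vmax):
            # 1D transfer-function lookup over the original value range [vmin, vmax].
            idx = np.clip(((values - vmin) / (vmax - vmin) * (len(tf) - 1)).astype(int),
                          0, len(tf) - 1)
            return tf[idx]

        # Hypothetical volume, transfer function, and naive range compensation.
        rng = np.random.default_rng(4)
        vol = rng.random((64, 64, 64))
        tf = np.linspace(0.0, 1.0, 256)                           # placeholder opacity ramp
        low = downsample(vol)
        stretched = (low - low.min()) / (low.max() - low.min())   # back to [0, 1]
        opacity = classify(tf, stretched, 0.0, 1.0)
        print(opacity.shape)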

    Real time falling leaves

    There is a growing interest in simulating natural phenomena in computer graphics applications. Animating natural scenes in real time is one of the most challenging problems, due to the inherent complexity of their structure, formed by millions of geometric entities, and the interactions that happen within. An example of a natural scenario needed for games or simulation programs is a forest. Forests are difficult to render because of the huge number of geometric entities and the large amount of detail to be represented. Moreover, the interactions between the objects (grass, leaves) and external forces such as wind are complex to model. In this paper we concentrate on rendering falling leaves at low cost. We present a technique that exploits graphics hardware to render thousands of leaves with different falling paths in real time and with low memory requirements.
    Postprint (published version).
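    A minimal sketch of the general idea of cheap, closed-form per-leaf falling paths evaluated in bulk (in the paper this evaluation is done on the graphics hardware; the concrete path model below, a spiral descent with per-leaf parameters, is only an assumption for illustration):

        import numpy as np

        def leaf_positions(t, start, fall_speed, radius, angular_speed, phase):
            # Closed-form position of every leaf at time t: a vertical drift plus a
            # horizontal spiral, with all parameters stored per leaf so thousands of
            # paths can be evaluated in one vectorized call (or per instance on a GPU).
            x = start[:, 0] + radius * np.cos(angular_speed * t + phase)
            y = start[:, 1] - fall_speed * t
            z = start[:, 2] + radius * np.sin(angular_speed * t + phase)
            return np.stack([x, y, z], axis=1)

        # Hypothetical forest: 10,000 leaves with randomized per-leaf parameters.
        rng = np.random.default_rng(5)
        n = 10_000
        start = rng.random((n, 3)) * np.array([100.0, 30.0, 100.0])
        pos = leaf_positions(2.5, start,
                             fall_speed=rng.uniform(0.5, 2.0, n),
                             radius=rng.uniform(0.1, 1.0, n),
                             angular_speed=rng.uniform(1.0, 4.0, n),
                             phase=rng.uniform(0.0, 2.0 * np.pi, n))
        print(pos.shape)   # (10000, 3)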